Fooling Neural Network Interpretations via Adversarial Model Manipulation

Heo, Juyeon, Joo, Sunghwan, Moon, Taesup

Neural Information Processing Systems

We ask whether neural network interpretation methods can be fooled via adversarial model manipulation, which we define as a model fine-tuning step that aims to radically alter the explanations without hurting the accuracy of the original models, e.g., VGG19, ResNet50, and DenseNet121. By incorporating the interpretation results directly in the penalty term of the objective function for fine-tuning, we show that state-of-the-art saliency map based interpreters, e.g., LRP, Grad-CAM, and SimpleGrad, can be easily fooled with our model manipulation. We propose two types of fooling, Passive and Active, and demonstrate that such fooling generalizes well to the entire validation set and transfers to other interpretation methods. Our results are validated both by visually showing the fooled explanations and by reporting quantitative metrics that measure the deviations from the original explanations. We claim that the stability of a neural network interpretation method with respect to our adversarial model manipulation is an important criterion to check when developing robust and reliable neural network interpretation methods.
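The fine-tuning objective the abstract describes can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' code: it uses plain input gradients (SimpleGrad) as the interpreter and a passive-fooling-style penalty that pushes saliency mass out of a given region. The names `fooling_penalty`, `target_mask`, and `lam`, and the specific penalty form, are assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of adversarial model
# manipulation: fine-tune a pretrained classifier with an extra penalty
# on its own saliency maps, so accuracy is preserved by the
# cross-entropy term while explanations are reshaped by the penalty.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg19(weights="IMAGENET1K_V1")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def simple_grad_saliency(model, x, y):
    """SimpleGrad: absolute input gradient of the target class score."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.view(-1, 1)).sum()
    # create_graph=True so the penalty on the saliency is itself
    # differentiable w.r.t. the model parameters.
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().sum(dim=1)  # (B, H, W) saliency map

def fooling_penalty(saliency, target_mask):
    """Passive-fooling-style term (assumed form): fraction of saliency
    mass falling inside `target_mask`, e.g., the region the original
    model highlighted. Minimizing it pushes saliency elsewhere."""
    sal = saliency / (saliency.sum(dim=(1, 2), keepdim=True) + 1e-8)
    return (sal * target_mask).sum(dim=(1, 2)).mean()

lam = 10.0  # accuracy/fooling trade-off (assumed value)

def manipulation_step(x, y, target_mask):
    optimizer.zero_grad()
    logits = model(x)
    sal = simple_grad_saliency(model, x, y)
    loss = F.cross_entropy(logits, y) + lam * fooling_penalty(sal, target_mask)
    loss.backward()
    optimizer.step()
    return loss.item()
```

An active-fooling variant would instead reward saliency inside an adversarially chosen region, e.g., by negating the masked-mass term, so the explanation is steered toward a specific wrong location rather than merely away from the right one.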


Reviews: Fooling Neural Network Interpretations via Adversarial Model Manipulation

Neural Information Processing Systems

Originality: As far as I am aware, the idea of adversarial *model* manipulation is a new one, and their citation of related work, e.g.
Quality: Although I have confidence that the submission is technically sound, I think the experiments are insufficient and miss important categories of models and explanation methods. I elaborate on this below.
Clarity: The paper seems fairly clearly written, and I'm confident that expert readers could reproduce its results.
Significance: I think the strongest selling point of the work is the core idea: adversarial model manipulation might have significant practical implications.


Reviews: Fooling Neural Network Interpretations via Adversarial Model Manipulation

Neural Information Processing Systems

This work addresses one of the most important problems in AI today, and the explainability of AI systems becomes more important as more systems interact with people. On the negative side, the examples the authors provided in their submission are far from sufficient. Discrimination should be demonstrated on real-life cases, as the authors presented in their rebuttal (on a small-scale problem). While the images are insightful, they are unfortunately not aligned with the paper's motivation and main insight, and this makes the paper significantly weaker. This is especially important since parole decisions or credit underwriting decisions do not use these deep learning architectures, and the attacks and defenses will be quite different. To conclude, we agree that this submission is a teaser that has yet to be proven out, but we prefer to see such work at NeurIPS.


Fooling Neural Network Interpretations via Adversarial Model Manipulation

Heo, Juyeon, Joo, Sunghwan, Moon, Taesup

arXiv.org Machine Learning

We ask whether neural network interpretation methods can be fooled via adversarial model manipulation, which we define as a model fine-tuning step that aims to radically alter the explanations without hurting the accuracy of the original model. By incorporating the interpretation results directly in the regularization term of the objective function for fine-tuning, we show that state-of-the-art interpreters, e.g., LRP and Grad-CAM, can be easily fooled with our model manipulation. We propose two types of fooling, passive and active, and demonstrate that such fooling generalizes well to the entire validation set and transfers to other interpretation methods. Our results are validated both by visually showing the fooled explanations and by reporting quantitative metrics that measure the deviations from the original explanations. We claim that the stability of a neural network interpretation method with respect to our adversarial model manipulation is an important criterion to check when developing robust and reliable neural network interpretation methods.
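The abstract mentions quantitative metrics that measure deviations from the original explanations. As an illustration only (the paper defines its own metrics), one simple such measure is the rank correlation between saliency maps before and after manipulation; the helper name and toy data below are assumptions.

```python
# A minimal sketch (assumed, not from the paper) of quantifying how far a
# manipulated model's explanations deviate from the original ones:
# rank-correlate the two saliency maps pixel-wise. A correlation near 1
# means the explanation barely moved; near 0 (or negative) means the
# fooling radically altered it.

import numpy as np
from scipy.stats import spearmanr

def saliency_deviation(original_map: np.ndarray, fooled_map: np.ndarray) -> float:
    """Spearman rank correlation between two saliency maps of equal shape."""
    rho, _ = spearmanr(original_map.ravel(), fooled_map.ravel())
    return rho

# Toy usage: a map and a spatially shifted copy correlate poorly.
rng = np.random.default_rng(0)
orig = rng.random((7, 7))                # e.g., a Grad-CAM map before fine-tuning
fooled = np.roll(orig, shift=3, axis=1)  # explanation mass moved elsewhere
print(saliency_deviation(orig, orig))    # 1.0
print(saliency_deviation(orig, fooled))  # well below 1.0
```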